by George Rasley
Fans of the Terminator series of science fiction action movies will recall that the Terminators were the product of Skynet, an Artificial Intelligence defense system that turned against humans and decided to wipe out its creators.
But what if Skynet had been programmed so that it only learned to attack white people, or Christians, or conservatives? The movie, and the results of that dystopian vision of the future, would have been very different.
But not so different from what Jen Gennai, Head of Responsible Innovation at Google Global Affairs, had in mind for Google’s Artificial Intelligence when she told an undercover reporter for Project Veritas:
The reason we launched our A.I. principles is that people were not putting that line in the sand, that they were not saying what’s fair and what’s equitable so we’re like, well we are a big company, we’re going to say it…
…my definition of fairness and bias specifically talks about historically marginalized communities. That’s what I care about. Communities who are in power and have traditionally been in power are NOT who I’m solving fairness for. That’s not what I set up for *inaudible* people to address.
Our definition of fairness is one of those things that we thought would be, like, obvious, and everyone would agree to. There were… the same people who voted for the current president do not agree with our definition of fairness.
Jen Gennai is the head of “Responsible Innovation” at Google Global Affairs. She determines policy and ethics for machine learning or artificial intelligence. What we’ve learned from the latest report from Project Veritas is that AI is increasingly what Google Search is all about.
And Google’s machines are not learning to be nice to conservatives, Christians, white people, or anyone else Google’s “woke” staff decide they don’t like.
Project Veritas also received a trove of confidential documents from within Google. One document is about algorithmic unfairness. It reads, “for example, imagine that a Google image query for CEOs shows predominantly men… even if it were a factually accurate representation of the world, it would be algorithmic unfairness.” Gaurav Gite, a Google software engineer, verified the thesis of the document.
The brave Google insider who came forward to Project Veritas explained the Google concept of “fairness.”
Google Insider: “What I found at Google related to fairness was a machine learning algorithm called ML Fairness, ML standing for Machine Learning, and fairness meaning whatever it is they want to define as fair. You could actually think of fairness as unfair because it’s taking as input the clicks that people are making and then figuring out which signals are being generated from those clicks, and which signals it wants to amplify and then also dampen.”
Google Insider: “So what they want to do is they want to act as gatekeepers between the user and the content that they’re trying to access. So they’re going to come in and they’re going to filter the content, and they’re going to say, ‘Actually we don’t want to give the user that information because it’s going to create an outcome that’s undesirable to us’.”
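The insider gives only an outline of the mechanism: click-derived relevance signals are fed in, and some categories of content are amplified while others are dampened before results reach the user. The minimal sketch below illustrates that concept only; the category names, weights, and function names are hypothetical and do not come from Google or the Project Veritas material.

```python
# Hypothetical sketch of the signal re-weighting the insider describes:
# click-derived scores are multiplied by per-category weights before
# ranking, so some content is amplified and some is dampened.
# All names and numbers here are illustrative, not from Google.
from dataclasses import dataclass

# Hypothetical per-category multipliers chosen by a policy team.
CATEGORY_WEIGHTS = {
    "preferred_topic": 1.5,   # amplified
    "neutral": 1.0,           # untouched
    "disfavored_topic": 0.4,  # dampened
}

@dataclass
class Result:
    url: str
    category: str
    click_score: float  # raw signal derived from user clicks

def rerank(results: list[Result]) -> list[Result]:
    """Re-rank results after applying category weights to the click signals."""
    def adjusted(r: Result) -> float:
        return r.click_score * CATEGORY_WEIGHTS.get(r.category, 1.0)
    return sorted(results, key=adjusted, reverse=True)

if __name__ == "__main__":
    sample = [
        Result("https://example.com/a", "disfavored_topic", 0.9),
        Result("https://example.com/b", "preferred_topic", 0.6),
        Result("https://example.com/c", "neutral", 0.7),
    ]
    for r in rerank(sample):
        print(r.url)  # the dampened result drops to the bottom despite its higher raw score
```

In this toy run the result with the strongest click signal ends up ranked last, which is the "dampening" effect the insider alleges.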
In response to questions posed by Project Veritas founder James O’Keefe, the Google insider explained how the search giant, which also owns video platform YouTube, applies the same kind of Leftwing “social justice” standards to censor YouTube videos posted by conservative content creators.
Google Insider: So the way that Google is able to target people is that they take videos, and then they do a transliteration through using artificial intelligence. And they look at the translated text of what those people are saying and then they assign certain categories to them like a right winger or news talk, and then they’re able to take those, and apply their algorithmic re-biasing unfairness algorithms to them so that their content is suppressed across the platform.
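Again, the description is a pipeline in outline: transcribe the video’s audio, classify the transcript into a label such as “right winger” or “news talk,” then use that label to decide how visible the video is. The sketch below is a hypothetical illustration of that chain; every function, stub, and weight is invented for the example and stands in for models and systems the insider does not name.

```python
# Hypothetical pipeline matching the insider's outline: transcribe a video,
# classify the transcript, then map the label to a visibility multiplier.
# Every function and value here is an illustrative stand-in.

# Hypothetical visibility multipliers applied after classification.
LABEL_WEIGHTS = {
    "right winger": 0.3,  # suppressed across the platform
    "news talk": 0.5,
    "other": 1.0,
}

def transcribe(video_path: str) -> str:
    # Stand-in for the speech-to-text step; a real system would run an
    # automatic speech recognition model over the video's audio track.
    return "placeholder transcript that sounds like news talk"

def classify(transcript: str) -> str:
    # Stand-in for a text classifier assigning labels such as
    # "right winger" or "news talk" (the labels the insider mentions).
    if "news talk" in transcript:
        return "news talk"
    return "other"

def visibility_weight(video_path: str) -> float:
    """Chain the steps: transcript -> label -> multiplier used to suppress or promote."""
    label = classify(transcribe(video_path))
    return LABEL_WEIGHTS.get(label, 1.0)

if __name__ == "__main__":
    print(visibility_weight("example_clip.mp4"))  # 0.5 with the stub transcript above
```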
Google Insider: So they’re playing narrative control. And what they’re doing is they’re applying their human, the human component, which is they’re going through – with an army of people – and they are manually intervening, and removing your content from, from their servers, and they are saying that the algorithms did it. And in that case for the high profile people, it’s not just ML Fairness that you guys have to worry about, it’s actual people that have their head filled with this SJW mindset, they’re going through and removing the content because of it – because they don’t agree with it.
As he wrapped up the first part of the interview, James O’Keefe asked the Google whistleblower if he was afraid, and his answer is one we should all be brave and principled enough to follow.
Google Insider: I am afraid. I was more afraid. But, I, I had a lot of difficulty with the concept of, you know, my life ending because of this, but I, I imagine what the other world would look like and it’s not a place I’d want to live in. Hopefully, I get away with it, and nothing bad happens, but bad things can happen. I mean, this is a behemoth, this is a Goliath, I am but a David trying to say that the emperor has no clothes. And, um, being small and little, I can be crushed, and I am aware of that. But, this is something that is bigger than me, this is something that needs to be said to the American public.
– – –
Photo “Google Censorship” by Mike Mackenzie. CC BY 2.0.